Fluorescence microscopy has long been an essential tool for long-term, time-lapse imaging of embryo growth in vivo. However, cumulative exposure is phototoxic to such sensitive live samples. While techniques such as light-sheet fluorescence microscopy (LSFM) allow for reduced exposure, they are not well suited for deep imaging. Other computational techniques are computationally expensive and often lack restoration quality. Various low-dosage imaging techniques can reconstruct a 3D volume from only a small number of axial (Z-axis) slices, but they too often lack restoration quality; moreover, acquiring densely sampled images along the axial direction (with small steps) is computationally expensive. To address these challenges, we introduce a compressive sensing (CS) based approach to fully reconstruct 3D volumes at the same signal-to-noise ratio (SNR) with less than half of the excitation dosage. We present the theory and experimentally validate the approach. To demonstrate our technique, we capture a 3D volume of RFP-labeled neurons in a zebrafish embryo spinal cord (30 μm thickness) with an axial sampling of 0.1 μm using a confocal microscope. From the results, we observe that the CS-based approach achieves accurate 3D volume reconstruction from less than 20% of the entire stack of optical sections. The CS-based methodology developed in this work can readily be applied to other deep imaging modalities, such as two-photon and light-sheet microscopy, where reducing sample phototoxicity is a critical challenge.
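A minimal sketch of the compressive-sensing idea behind this abstract: each pixel's axial intensity profile is assumed sparse in a DCT basis and recovered from a small fraction of z-slices by iterative soft-thresholding (ISTA). The signal, sampling fraction, and solver parameters below are illustrative assumptions, not the paper's.

```python
# Illustrative CS recovery of one pixel's axial (z) profile from ~20% of slices.
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)

n_z = 100                                             # dense axial positions
keep = rng.choice(n_z, size=n_z // 5, replace=False)  # acquire <20% of slices

# Toy ground-truth axial intensity profile for a single (x, y) pixel.
z = np.linspace(0, 1, n_z)
x_true = np.exp(-((z - 0.4) / 0.05) ** 2) + 0.5 * np.exp(-((z - 0.7) / 0.08) ** 2)
y = x_true[keep]                                      # the slices actually measured

def ista(y, keep, n, lam=0.01, n_iter=500, step=0.5):
    """Recover a DCT-sparse signal from subsampled entries via ISTA."""
    c = np.zeros(n)                      # DCT coefficients of the estimate
    for _ in range(n_iter):
        x = idct(c, norm="ortho")
        r = np.zeros(n)
        r[keep] = x[keep] - y            # residual on the measured slices only
        c = c - step * dct(r, norm="ortho")
        c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0.0)  # soft-threshold
    return idct(c, norm="ortho")

x_hat = ista(y, keep, n_z)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```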
Science tests competing theories or models by evaluating the similarity of their predictions against observational experience. Thus, how we measure similarity fundamentally determines what we learn. In machine learning and scientific modeling, similarity metrics are used as objective functions. A classic example is mean squared error, which is the optimal measure of similarity when errors are normally distributed and independent and identically distributed (iid). In many cases, however, the error distribution is neither normal nor iid, so it is left to the scientist to determine an appropriate objective. Here, we review how information theory can guide that selection, then demonstrate the approach with a simple hydrologic model.
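To make the argument concrete, the sketch below scores two candidate error models by average negative log-likelihood on synthetic residuals; whichever likelihood matches the data better implies the matched objective (MSE for Gaussian errors, MAE for Laplace). The data and the trivial model are hypothetical stand-ins for the paper's hydrologic example.

```python
# Let the error distribution pick the objective: compare per-sample NLL (nats).
import numpy as np

rng = np.random.default_rng(1)
obs = 10 + rng.laplace(scale=2.0, size=1000)   # heavy-tailed "observations"
pred = np.full_like(obs, 10.0)                 # a trivial model prediction
res = obs - pred

sigma = res.std()
nll_gauss = 0.5 * np.log(2 * np.pi * sigma**2) + (res**2).mean() / (2 * sigma**2)
b = np.abs(res).mean()
nll_laplace = np.log(2 * b) + np.abs(res).mean() / b

print(f"Gaussian NLL {nll_gauss:.3f} vs Laplace NLL {nll_laplace:.3f}")
# If the Laplace model wins, minimizing MAE (not MSE) is the matched objective.
```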
As Artificial and Robotic Systems are increasingly deployed and relied upon for real-world applications, it is important that they exhibit the ability to continually learn and adapt in dynamically-changing environments, becoming Lifelong Learning Machines. Continual/lifelong learning (LL) involves minimizing catastrophic forgetting of old tasks while maximizing a model's capability to learn new tasks. This paper addresses the challenging lifelong reinforcement learning (L2RL) setting. Pushing the state-of-the-art forward in L2RL and making L2RL useful for practical applications requires more than developing individual L2RL algorithms; it requires making progress at the systems-level, especially research into the non-trivial problem of how to integrate multiple L2RL algorithms into a common framework. In this paper, we introduce the Lifelong Reinforcement Learning Components Framework (L2RLCF), which standardizes L2RL systems and assimilates different continual learning components (each addressing different aspects of the lifelong learning problem) into a unified system. As an instantiation of L2RLCF, we develop a standard API allowing easy integration of novel lifelong learning components. We describe a case study that demonstrates how multiple independently-developed LL components can be integrated into a single realized system. We also introduce an evaluation environment in order to measure the effect of combining various system components. Our evaluation environment employs different LL scenarios (sequences of tasks) consisting of Starcraft-2 minigames and allows for the fair, comprehensive, and quantitative comparison of different combinations of components within a challenging common evaluation environment.
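The abstract mentions a standard API for integrating lifelong-learning components but does not spell it out, so the following is a purely hypothetical sketch of what such an interface could look like; every class and method name here is an assumption for illustration, not the L2RLCF API itself.

```python
# Hypothetical component interface and system loop in the spirit of L2RLCF.
from abc import ABC, abstractmethod

class LLComponent(ABC):
    """One lifelong-learning component (e.g., replay, regularization)."""
    @abstractmethod
    def on_task_start(self, task_id: str) -> None: ...
    @abstractmethod
    def on_step(self, transition: dict) -> None: ...
    @abstractmethod
    def on_task_end(self, task_id: str) -> None: ...

class L2RLSystem:
    """Assembles independently developed components into one agent loop."""
    def __init__(self, components: list[LLComponent]):
        self.components = components

    def run_task(self, task_id: str, env_steps):
        for c in self.components:
            c.on_task_start(task_id)
        for transition in env_steps:      # e.g., Starcraft-2 minigame steps
            for c in self.components:
                c.on_step(transition)
        for c in self.components:
            c.on_task_end(task_id)
```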
Acquiring a better understanding of drought impacts becomes increasingly vital under a warming climate. Traditional drought indices describe mainly biophysical variables, not impacts on social, economic, and environmental systems. We utilized natural language processing and Bidirectional Encoder Representations from Transformers (BERT) based transfer learning, fine-tuning the model on data from the news-based Drought Impact Report (DIR) and then applying it to recognize seven types of drought impacts in filtered Twitter data from the United States. Our model achieved a satisfactory macro-F1 score of 0.89 on the DIR test set. The model was then applied to California tweets and validated with keyword-based labels, yielding a macro-F1 score of 0.58. However, given the limitations of keywords, we also spot-checked tweets where the labels disagreed: 83.5% of the BERT labels were correct compared to the keyword labels. Overall, the fine-tuned BERT-based recognizer provided sound predictions and valuable information on drought impacts, and the interpretation and analysis of the model were consistent with experiential domain expertise.
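A minimal sketch of the fine-tuning step using the Hugging Face transformers API; the checkpoint choice, the multi-label setup, and the toy data are assumptions rather than the paper's exact configuration.

```python
# One fine-tuning step of a BERT classifier for seven drought-impact types.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

NUM_IMPACTS = 7  # seven drought-impact types, as in the paper

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=NUM_IMPACTS,
    problem_type="multi_label_classification",  # assumes impacts can co-occur
)

texts = ["Ranchers selling cattle early as pastures dry up"]       # toy example
labels = torch.tensor([[0, 1, 0, 0, 0, 0, 0]], dtype=torch.float)  # toy labels

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
out = model(**batch, labels=labels)  # BCE-with-logits loss for multi-label
out.loss.backward()
optimizer.step()
```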
Point-of-Care Ultrasound (POCUS) refers to clinician-performed and interpreted ultrasonography at the patient's bedside. Interpreting these images requires a high level of expertise, which may not be available during emergencies. In this paper, we support POCUS by developing classifiers that can aid medical professionals by diagnosing whether or not a patient has pneumothorax. We decomposed the task into multiple steps, using YOLOv4 to extract relevant regions of the video and a 3D sparse coding model to represent video features. Given the difficulty in acquiring positive training videos, we trained a small-data classifier with a maximum of 15 positive and 32 negative examples. To counteract this limitation, we leveraged subject matter expert (SME) knowledge to limit the hypothesis space, thus reducing the cost of data collection. We present results using two lung ultrasound datasets and demonstrate that our model is capable of achieving performance on par with SMEs in pneumothorax identification. We then developed an iOS application that runs our full system in less than 4 seconds on an iPad Pro, and less than 8 seconds on an iPhone 13 Pro, labeling key regions in the lung sonogram to provide interpretable diagnoses.
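The sketch below illustrates only the sparse-coding feature step, assuming the detector stage has already cropped relevant video patches; scikit-learn's dictionary learning stands in for the paper's 3D sparse coding model, and all shapes and parameters are illustrative.

```python
# Sparse codes for spatio-temporal patches, pooled into per-video features.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
# 200 flattened spatio-temporal patches (e.g., 8x8 pixels x 4 frames).
patches = rng.normal(size=(200, 8 * 8 * 4))

dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0, random_state=0)
codes = dico.fit(patches).transform(patches)   # sparse code per patch

# Pool codes into one feature vector per video for the small-data classifier.
video_feature = np.abs(codes).mean(axis=0)
print(video_feature.shape)                     # (64,)
```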
Privacy noise may negate the benefits of using adaptive optimizers in differentially private model training. Prior works typically address this issue by using auxiliary information (e.g., public data) to boost the effectiveness of adaptive optimization. In this work, we explore techniques to estimate and efficiently adapt to gradient geometry in private adaptive optimization without auxiliary data. Motivated by the observation that adaptive methods can tolerate stale preconditioners, we propose differentially private adaptive training with delayed preconditioners (DP^2), a simple method that constructs delayed but less noisy preconditioners to better realize the benefits of adaptivity. Theoretically, we provide convergence guarantees for our method for both convex and non-convex problems, and analyze trade-offs between delay and privacy noise reduction. Empirically, we explore DP^2 across several real-world datasets, demonstrating that it can improve convergence speed by as much as 4x relative to non-adaptive baselines and match the performance of state-of-the-art optimization methods that require auxiliary data.
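A simplified sketch of the delayed-preconditioner mechanism: privatized gradients from earlier steps are accumulated, and the preconditioner is refreshed only periodically, so the current step's noise does not enter it. This illustrates the idea only; it is not the paper's exact DP^2 algorithm, and it omits privacy accounting.

```python
# Toy DP training loop with a stale, periodically refreshed preconditioner.
import numpy as np

rng = np.random.default_rng(0)
d, steps, delay = 10, 100, 20
clip, sigma, lr, eps = 1.0, 1.0, 0.1, 1e-8

w = np.zeros(d)
precond = np.ones(d)       # stale second-moment estimate, RMSProp-style
accum = np.zeros(d)        # accumulates privatized gradients for next refresh

for t in range(steps):
    g = w - 1.0            # toy gradient of 0.5 * ||w - 1||^2
    g = g / max(1.0, np.linalg.norm(g) / clip)           # clip
    g_priv = g + rng.normal(scale=sigma * clip, size=d)  # add privacy noise
    accum += g_priv**2
    if (t + 1) % delay == 0:                 # refresh preconditioner lazily
        precond = accum / delay
        accum = np.zeros(d)
    w -= lr * g_priv / (np.sqrt(precond) + eps)          # adaptive DP update
```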
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
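Since the models are publicly released, a short usage sketch via the transformers library is shown below; the 560M-parameter variant is used here so the example runs on modest hardware (the full model has 176B parameters), and the prompt is arbitrary.

```python
# Load a released BLOOM checkpoint and generate a short continuation.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("The ROOTS corpus contains", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```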
In recent years, deep learning has infiltrated every field it has touched, reducing the need for specialist knowledge and automating the process of knowledge discovery from data. This review argues that astronomy is no different, and that we are currently in the midst of a deep learning revolution that is transforming the way we do astronomy. We trace the history of astronomical connectionism from the early days of multilayer perceptrons, through the second wave of convolutional and recurrent neural networks, to the current third wave of self-supervised and unsupervised deep learning. We then predict that we will soon enter a fourth wave of astronomical connectionism, in which finetuned versions of an all-encompassing 'foundation' model will replace expertly crafted deep learning models. We argue that such a model can only be brought about through a symbiotic relationship between astronomy and connectionism, whereby astronomy provides high quality multimodal data to train the foundation model, and in turn the foundation model is used to advance astronomical research.
Equatorial plasma bubbles (EPBs) are plumes of low-density plasma that rise from the bottomside of the F layer toward the exosphere. EPBs are a known cause of radio-wave scintillation, which can degrade communications with spacecraft. We build a random forest regressor to predict and forecast the probability [0-1] of an EPB as detected by the onboard IBI processor. We use eight years of Swarm data, from 2014 to 2021, and transform them from a time series into a 5-dimensional space comprising latitude, longitude, MLT, year, and day of year. We also add Kp, F10.7 cm, and solar wind speed. The observed dependence of EPBs on geolocation, local time, season, and solar activity mostly agrees with existing work, whereas the link to geomagnetic activity remains unclear. The prediction achieves an accuracy of 88% and performs well across EPB-specific spatiotemporal scales, demonstrating that the XGBoost method can successfully capture the climatological and daily variability of Swarm EPBs. Capturing the daily variability has long eluded researchers because of local and stochastic features within the ionosphere. We leverage Shapley values to explain the model and to gain insight into the physics of EPBs. We find that as solar wind speed increases, the probability of an EPB decreases. We also identify a sharp spike in EPB probability. Both of these insights come directly from the XGBoost and Shapley techniques.
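An illustrative sketch of the modelling-and-explanation step on synthetic data: gradient-boosted trees over the 5-dimensional space plus drivers, explained with Shapley values. The feature names and the toy target are placeholders for the Swarm/IBI dataset, not the paper's pipeline.

```python
# Fit boosted trees on toy EPB-style features and rank them by mean |SHAP|.
import numpy as np
import xgboost as xgb
import shap

rng = np.random.default_rng(0)
cols = ["lat", "lon", "mlt", "year", "doy", "kp", "f107", "sw_speed"]
X = rng.normal(size=(1000, len(cols)))
# Toy target: EPB probability decreasing with solar wind speed (last column).
y = 1 / (1 + np.exp(X[:, -1]))

model = xgb.XGBRegressor(n_estimators=200, max_depth=4)
model.fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print(dict(zip(cols, np.abs(shap_values).mean(axis=0))))  # mean |SHAP| per feature
```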
Agents must continually monitor their partner's affective state to understand and engage in social interactions. However, methods for evaluating affect recognition do not account for changes in classification performance that may occur during occlusions or transitions between affective states. This paper addresses temporal patterns in affect classification performance in the context of an infant-robot interaction, where infants' affective states contribute to their ability to participate in a therapeutic leg-movement activity. To support robustness to facial occlusions in video recordings, we trained infant affect recognition classifiers using both facial and body features. We then conducted an in-depth analysis of our best-performing models to assess how performance varied over time as the models encountered missing data and changing infant affect. During time windows in which features were extracted with high confidence, a unimodal model trained on facial features achieved the same optimal performance as multimodal models trained on both facial and body features. However, the multimodal models outperformed the unimodal models when evaluated over the entire dataset. Additionally, model performance was weakest when predicting transitions between affective states and improved after multiple consecutive predictions of the same affective state. These findings highlight the benefits of incorporating body features in continuous affect recognition for infants, and underscore the importance of evaluating variability in model performance over time and in the presence of missing data.
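A small sketch of the temporal evaluation idea on synthetic labels: accuracy is computed in sliding windows while transition frames are counted separately, so performance dips around state changes become visible. Nothing here reproduces the paper's features, models, or window definitions.

```python
# Windowed accuracy over a synthetic affect-label timeline with transitions.
import numpy as np

rng = np.random.default_rng(0)
T, win = 300, 30
truth = np.repeat(rng.integers(0, 3, size=T // win), win)  # affect states
pred = truth.copy()
flip = rng.random(T) < 0.15
pred[flip] = rng.integers(0, 3, size=flip.sum())           # noisy predictions

transition = np.zeros(T, dtype=bool)
transition[1:] = truth[1:] != truth[:-1]                   # state changes

for start in range(0, T, win):
    sl = slice(start, start + win)
    acc = (pred[sl] == truth[sl]).mean()
    print(f"window {start:3d}-{start + win:3d}: acc={acc:.2f}, "
          f"transitions={int(transition[sl].sum())}")
```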